Laws of robotics

Laws of Robotics are sets of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and they are a topic of active research and development in the fields of robotics and artificial intelligence.

The best-known laws are those written by Isaac Asimov in the 1940s, or those based upon them, but other sets of laws have been proposed by researchers in the decades since.

Isaac Asimov's "Three Laws of Robotics"

The best-known set of laws is Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Adaptations and extensions based upon this framework exist. As of 2011 the Three Laws remained a "fictional device".[1]

EPSRC / AHRC principles of robotics

In 2011 the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of the United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed. Both were based on a September 2010 research workshop:[2][3][1]

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

The messages intended to be conveyed were:

  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice hurts us all.
  3. Addressing obvious public concerns will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research we should work with experts from other disciplines including: social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.

References

  1. Stewart, Jon (2011-10-03). "Ready for the robot revolution?". BBC News. http://www.bbc.co.uk/news/technology-15146053. Retrieved 2011-10-03.
  2. "Principles of robotics: Regulating Robots in the Real World". Engineering and Physical Sciences Research Council. http://www.epsrc.ac.uk/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx. Retrieved 2011-10-03.
  3. Winfield, Alan. "Five roboethical principles – for humans". New Scientist. http://www.newscientist.com/article/mg21028111.100-five-roboethical-principles--for-humans.htm. Retrieved 2011-10-03.